Improving Robustness of Neural Networks against Adversarial Examples
Gaňo, Martin ; Matyáš, Jiří (reviewer) ; Češka, Milan (supervisor)
This work discusses adversarial attacks on image-classifier neural network models. Our goal is to summarize and demonstrate adversarial methods to show that they pose a serious issue in machine learning. The key contribution of this work is the implementation of a tool for training a model robust against adversarial examples. Our approach is to minimize the maximum of the loss function of the target model. Related work and our own experiments lead us to use Projected Gradient Descent (PGD) as the target attack; therefore, we train against data generated by PGD. As a result, using the framework we can achieve accuracy of more than 90% against sophisticated adversarial attacks.
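The min-max training scheme in the abstract rests on the inner PGD attack: repeatedly step in the sign of the loss gradient with respect to the input, then project back into an L-infinity ball around the original example. The thesis itself targets neural networks; as a hedged, minimal sketch of the same idea, the attack below uses a logistic-regression classifier (a hypothetical stand-in with an analytic gradient), not the author's actual model or code.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Minimal PGD sketch on a logistic-regression classifier.

    x: input vector, y: label in {0, 1}, (w, b): model parameters.
    Each step ascends the cross-entropy loss via the sign of its
    input gradient, then projects back into the L-infinity ball
    of radius eps around the original x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b                         # logit
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid probability
        grad = (p - y) * w                        # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
    return x_adv
```

Adversarial training then plugs this attack into the outer loop: each minibatch is replaced by `pgd_attack(...)` outputs before the usual gradient step on the model parameters, which is exactly the "minimize the maximum loss" formulation.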
